internal representation


I don't see images in my head. Can training give me a mind's eye?

New Scientist

Training programmes for people with aphantasia - the inability to create mental images - are challenging neuroscientists' understanding of how we create thoughts. What do you see when you try to picture an apple? Last December, I closed my eyes and tried to visualise a potoo. This tropical bird has a "round, kind of pill-shaped head", my mental imagery coach told me, and is covered with brown feathers. Its cartoonishly large mouth opens like a gaping smile to reveal a pink, fleshy colour, and its large irises can make its eyes seem entirely black.


How unconstrained machine-learning models learn physical symmetries

Domina, Michelangelo, Abbott, Joseph William, Pegolo, Paolo, Bigi, Filippo, Ceriotti, Michele

arXiv.org Machine Learning

The requirement of generating predictions that exactly fulfill the fundamental symmetry of the corresponding physical quantities has profoundly shaped the development of machine-learning models for physical simulations. In many cases, models are built using constrained mathematical forms that ensure that symmetries are enforced exactly. However, unconstrained models that do not obey rotational symmetries are often found to have competitive performance, and to be able to learn an approximately equivariant behavior to a high level of accuracy with a simple data augmentation strategy. In this paper, we introduce rigorous metrics to measure the symmetry content of the learned representations in such models, and assess the accuracy with which the outputs fulfill the equivariance condition. We apply these metrics to two unconstrained, transformer-based models operating on decorated point clouds (a graph neural network for atomistic simulations and a PointNet-style architecture for particle physics) to investigate how symmetry information is processed across architectural layers and is learned during training. Based on these insights, we establish a rigorous framework for diagnosing spectral failure modes in ML models. Enabled by this analysis, we demonstrate that one can achieve superior stability and accuracy by strategically injecting the minimum required inductive biases, preserving the high expressivity and scalability of unconstrained architectures while guaranteeing physical fidelity.
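
The equivariance condition the abstract refers to can be checked numerically: for a vector-valued output (a force, dipole, or momentum), rotating the input point cloud should rotate the prediction by the same matrix. The sketch below is one plausible form such a metric could take, not the paper's own implementation; the `model` callable mapping an (N, 3) point cloud to a 3-vector is a hypothetical stand-in.

import numpy as np

def random_rotation(rng):
    """Draw a uniformly random 3x3 rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))        # fix column signs so the distribution is uniform
    if np.linalg.det(q) < 0:        # ensure a proper rotation (det = +1)
        q[:, 0] *= -1.0
    return q

def equivariance_error(model, points, n_samples=64, seed=0):
    """Mean relative deviation between model(R @ x) and R @ model(x).

    `model` maps an (N, 3) point cloud to a 3-vector; a perfectly
    equivariant model gives an error of zero.
    """
    rng = np.random.default_rng(seed)
    base = model(points)
    errs = []
    for _ in range(n_samples):
        R = random_rotation(rng)
        rotated_out = model(points @ R.T)   # prediction on the rotated input
        expected = base @ R.T               # what exact equivariance would demand
        errs.append(np.linalg.norm(rotated_out - expected)
                    / (np.linalg.norm(expected) + 1e-12))
    return float(np.mean(errs))

# Toy check: the centroid is exactly equivariant, so the error is ~0.
pts = np.random.default_rng(1).normal(size=(20, 3))
print(equivariance_error(lambda p: p.mean(axis=0), pts))

An unconstrained model trained with rotational data augmentation would be expected to drive this number down without ever reaching exactly zero, which is the kind of residual gap such metrics are meant to quantify.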


Image-to-image translation for cross-domain disentanglement

Neural Information Processing Systems

Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve a finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information for both domains.
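
To make the shared/exclusive split concrete, here is a minimal PyTorch sketch of an encoder whose latent code is divided into a shared part and a domain-exclusive part; each domain would have its own such encoder, giving the three kinds of factors the abstract mentions. All layer sizes, names, and the 64x64 RGB input are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Toy encoder that splits an image into a shared and an exclusive code.

    The shared code is meant to hold information common to both domains,
    the exclusive code whatever is specific to this encoder's own domain.
    """
    def __init__(self, shared_dim=64, exclusive_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.to_shared = nn.Linear(64, shared_dim)       # cross-domain content
        self.to_exclusive = nn.Linear(64, exclusive_dim) # domain-specific style

    def forward(self, x):
        h = self.backbone(x)
        return self.to_shared(h), self.to_exclusive(h)

# A decoder for the other domain would consume the shared code from this
# domain's image together with an exclusive code from its own domain.
x = torch.randn(8, 3, 64, 64)
shared, exclusive = DisentangledEncoder()(x)
print(shared.shape, exclusive.shape)   # (8, 64) and (8, 16)

Keeping the two heads as separate linear projections is only one way to realise the split; the point of the sketch is simply that the internal representation is factored, so translation can swap the exclusive part while holding the shared part fixed.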